Google Reports Escalating 'Distillation Attacks' on Gemini AI
Google has disclosed a surge in sophisticated "distillation attacks" targeting its Gemini AI chatbot, with bad actors systematically probing the system to reverse-engineer its proprietary technology. The company's Threat Intelligence Group identified perpetrators worldwide, primarily private firms and researchers seeking to bypass the steep costs of AI development.
These attacks mirror last year's allegations by OpenAI against Chinese firm DeepSeek, accused of model theft via distillation techniques. Italy and Ireland subsequently banned DeepSeek, highlighting the geopolitical tensions surrounding AI intellectual property. The financial incentive is clear: replicating cutting-edge AI models through distillation costs pennies compared to the billions required for original development.
Despite defensive measures, AI systems remain vulnerable to such exploits, and these attacks underscore the fragile security of conversational AI platforms accessible to any internet user. According to Google's warning, smaller companies deploying custom AI tools should anticipate similar targeting.